Machine learning for radiation outcome modeling and prediction
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/155503/1/mp13570_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/155503/2/mp13570.pd
Predicting successful clinical candidates for fiducial-free lung tumor tracking with a deep learning binary classification model
Robotic radiosurgery allows for marker-less lung tumor tracking by detecting
tumor density variations in 2D orthogonal X-ray images. The ability to detect
and track a lung lesion depends on its size, density, and location, and has to
be evaluated on a case-by-case basis. The current method for identifying which
patients can be successfully treated with fiducial-free lung tumor tracking is a
time-consuming process called Lung Optimized Treatment (LOT) simulation. The
process involves CT acquisition, generation of a simulation plan, creation of
the patient breathing model, and execution of the simulation plan on the
treatment delivery platform.
The aim of this study was to develop a tool enabling binary classification of
trackable and non-trackable lung tumors, allowing automatic selection of the
optimal tracking method for patients undergoing robotic radiosurgery without
having to perform the LOT simulation.
We developed a deep learning classification model and tested 5 different
network architectures to classify lung cancer lesions from enhanced digitally
reconstructed radiographs (DRRs) generated from planning CTs. This study
included 129 patients with single or multiple lesions, for a total of 144 lung
lesions (n=115 trackable, n=29 untrackable). A total of 271 images were
included in our analysis. We kept 80% of the images for training, 10% for
validation, and the remaining 10% for testing.
The binary classification accuracy reached 100% after training, in both the
validation and the test sets.
Candidates for fiducial-free lung tumor tracking during robotic lung
radiosurgery can be successfully identified by using a deep learning model
classifying DRR images sourced from simulation CT scans.
Comment: 19 pages, 7 figures
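The 80%/10%/10% partition described above can be sketched as follows. This is a minimal illustration, not the authors' code; the filenames and the helper function are hypothetical, with the image count (271) taken from the abstract.

```python
import random

def split_dataset(items, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle a list of items and partition it into train/validation/test subsets."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# 271 DRR images, as in the study; the filenames are placeholders.
images = [f"drr_{i:03d}.png" for i in range(271)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 216 27 28
```

Keeping the test set untouched during architecture selection is what makes the reported 100% test accuracy meaningful rather than an artifact of tuning.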
IMRT QA using machine learning: A multi-institutional validation.
Purpose: To validate a machine learning approach to Virtual intensity-modulated radiation therapy (IMRT) quality assurance (QA) for accurately predicting gamma passing rates using different measurement approaches at different institutions.
Methods: A Virtual IMRT QA framework was previously developed using a machine learning algorithm based on 498 IMRT plans, in which QA measurements were performed using diode-array detectors with a 3% local/3 mm gamma criterion and a 10% threshold at Institution 1. An independent set of 139 IMRT measurements from a different institution, Institution 2, with QA data based on portal dosimetry using the same gamma index, was used to test the mathematical framework. Only pixels with ≥10% of the maximum calibrated units (CU) or dose were included in the comparison. Plans were characterized by 90 different complexity metrics. A weighted Poisson regression with Lasso regularization was trained to predict passing rates using the complexity metrics as input.
Results: The methodology predicted passing rates within 3% accuracy for all composite plans measured using diode-array detectors at Institution 1, and within 3.5% for 120 of 139 plans measured with portal dosimetry on a per-beam basis at Institution 2. The remaining 19 measurements had large areas of low CU, where portal dosimetry shows larger disagreement with the calculated dose, so their failure was expected. These beams need further modeling in the treatment planning system to correct the under-response in low-dose regions.
The important features selected by Lasso to predict gamma passing rates were: the complete irradiated area outline (CIAO), jaw position, the fraction of MLC leaves with gaps smaller than 20 or 5 mm, the fraction of the area receiving less than 50% of the total CU, the fraction of the area receiving dose from the penumbra, the weighted average irregularity factor, and the duty cycle.
Conclusions: We have demonstrated that Virtual IMRT QA can predict passing rates using different measurement techniques and across multiple institutions. Prediction of QA passing rates can have profound implications for the current IMRT QA process.
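To make the gamma criterion in the abstract concrete (3% local dose difference, 3 mm distance-to-agreement, 10% low-dose threshold), here is a toy one-dimensional passing-rate calculation. This is a simplified sketch for illustration only, not the institutions' measurement or analysis software; the dose profiles are invented.

```python
import numpy as np

def gamma_passing_rate(calc, meas, positions, dose_tol=0.03, dist_tol=3.0, threshold=0.10):
    """Toy 1D gamma analysis with a local dose criterion: the fraction of
    evaluated points whose gamma index is <= 1. `positions` and `dist_tol`
    are in mm; points below `threshold` of the maximum dose are skipped."""
    d_max = calc.max()
    gammas = []
    for x, d_ref in zip(positions, calc):
        if d_ref < threshold * d_max:
            continue  # below the 10% evaluation threshold
        dd = (meas - d_ref) / (dose_tol * d_ref)   # local 3% dose criterion
        dx = (positions - x) / dist_tol            # 3 mm distance criterion
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return float((np.array(gammas) <= 1.0).mean())

x = np.arange(0.0, 50.0, 1.0)            # detector positions in mm
calc = np.exp(-((x - 25.0) / 10.0)**2)   # calculated dose profile
meas = 1.01 * calc                       # "measured" profile, 1% high everywhere
print(gamma_passing_rate(calc, meas, x))  # 1.0
```

A uniform 1% dose offset stays well inside the 3% local tolerance, so every evaluated point passes; the machine learning model in the abstract predicts this passing rate directly from plan complexity metrics instead of computing it from measurements.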
Exploratory analysis using machine learning to predict for chest wall pain in patients with stage I non-small-cell lung cancer treated with stereotactic body radiation therapy.
Background and purpose: Chest wall toxicity is observed after stereotactic body radiation therapy (SBRT) for peripherally located lung tumors. We utilized machine learning algorithms to identify toxicity predictors and develop dose-volume constraints.
Materials and methods: Twenty-five patient, tumor, and dosimetric features were recorded for 197 consecutive patients with Stage I NSCLC treated with SBRT, 11 of whom (5.6%) developed CTCAE v4 grade ≥2 chest wall pain. Decision tree modeling was used to determine chest wall syndrome (CWS) thresholds for individual features. Significant features were determined using independent multivariate methods incorporating out-of-bag estimation with random forests (RF) and bootstrapping (100 iterations) with decision trees.
Results: Univariate analysis identified rib dose to 1 cc < 4000 cGy (P = 0.01), chest wall dose to 30 cc < 1900 cGy (P = 0.035), rib Dmax < 5100 cGy (P = 0.05), and lung dose to 1000 cc < 70 cGy (P = 0.039) as statistically significant thresholds for avoiding CWS. Subsequent multivariate analysis confirmed the importance of rib dose to 1 cc, chest wall dose to 30 cc, and rib Dmax. Learning-curve experiments showed the dataset to be self-consistent and to provide a realistic model for CWS analysis.
Conclusions: Using machine learning algorithms in this first-of-its-kind study, we identified robust features and cutoffs predictive of the rare clinical event of CWS. Additional data from planned multicenter studies will help increase the accuracy of the multivariate analysis.
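The univariate cutoffs reported above translate directly into plan-screening checks. A minimal sketch follows: the threshold values are taken from the abstract, but the dictionary keys, the function, and the example plan are hypothetical, not the authors' decision-tree model.

```python
# Univariate cutoffs for avoiding chest wall syndrome, as reported in the
# abstract; all doses in cGy. Key names are illustrative placeholders.
CWS_CONSTRAINTS = {
    "rib_d1cc": 4000,          # dose to 1 cc of rib
    "chest_wall_d30cc": 1900,  # dose to 30 cc of chest wall
    "rib_dmax": 5100,          # maximum rib dose
    "lung_d1000cc": 70,        # dose to 1000 cc of lung
}

def flag_cws_risk(plan_doses):
    """Return the names of constraints a plan meets or exceeds; an empty
    list means every cutoff associated with avoiding CWS is satisfied."""
    return [name for name, limit in CWS_CONSTRAINTS.items()
            if plan_doses.get(name, 0) >= limit]

plan = {"rib_d1cc": 4200, "chest_wall_d30cc": 1500,
        "rib_dmax": 4800, "lung_d1000cc": 60}
print(flag_cws_risk(plan))  # ['rib_d1cc']
```

In practice these thresholds would inform planning objectives rather than a hard pass/fail gate, since the multivariate analysis weighs the features jointly.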
Radiosensitization of gliomas by intracellular generation of 5-fluorouracil potentiates prodrug activator gene therapy with a retroviral replicating vector.
A tumor-selective non-lytic retroviral replicating vector (RRV), Toca 511, and an extended-release formulation of 5-fluorocytosine (5-FC), Toca FC, are currently being evaluated in clinical trials in patients with recurrent high-grade glioma (NCT01156584, NCT01470794 and NCT01985256). Tumor-selective propagation of this RRV enables highly efficient transduction of glioma cells with cytosine deaminase (CD), which serves as a prodrug activator for conversion of the anti-fungal prodrug 5-FC to the anti-cancer drug 5-fluorouracil (5-FU) directly within the infected cells. We investigated whether, in addition to its direct cytotoxic effects, 5-FU generated intracellularly by RRV-mediated CD/5-FC prodrug activator gene therapy could also act as a radiosensitizing agent. Efficient transduction by RRV and expression of CD were confirmed in the highly aggressive, radioresistant human glioblastoma cell line U87EGFRvIII and its parental cell line U87MG (U87). RRV-transduced cells showed significant radiosensitization even after transient exposure to 5-FC. This was confirmed both in vitro by a clonogenic colony survival assay and in vivo by bioluminescence imaging analysis. These results provide a convincing rationale for development of tumor-targeted radiosensitization strategies utilizing the tumor-selective replicative capability of RRV, and incorporation of radiation therapy into future clinical trials evaluating Toca 511 and Toca FC in brain tumor patients.
Building more accurate decision trees with the additive tree.
The expansion of machine learning to high-stakes application domains such as medicine, finance, and criminal justice, where making informed decisions requires clear understanding of the model, has increased the interest in interpretable machine learning. The widely used Classification and Regression Trees (CART) have played a major role in health sciences, due to their simple and intuitive explanation of predictions. Ensemble methods like gradient boosting can improve the accuracy of decision trees, but at the expense of the interpretability of the generated model. Additive models, such as those produced by gradient boosting, and full interaction models, such as CART, have been investigated largely in isolation. We show that these models exist along a spectrum, revealing previously unseen connections between these approaches. This paper introduces a rigorous formalization for the additive tree, an empirically validated learning technique for creating a single decision tree, and shows that this method can produce models equivalent to CART or gradient boosted stumps at the extremes by varying a single parameter. Although the additive tree is designed primarily to provide both the model interpretability and predictive performance needed for high-stakes applications like medicine, it can also produce decision trees represented by hybrid models between CART and boosted stumps that can outperform either of these approaches.
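One endpoint of the spectrum the abstract describes, gradient boosted stumps, can be sketched in a few lines of NumPy. This is a generic illustration of boosting one-split regression trees, not the additive-tree algorithm itself; the data and hyperparameters are invented.

```python
import numpy as np

def fit_stump(x, residual):
    """Find the single-feature split minimizing squared error,
    returning (threshold, left_value, right_value)."""
    best = (x.min() - 1.0, residual.mean(), residual.mean())
    best_err = ((residual - residual.mean())**2).sum()
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if err < best_err:
            best_err, best = err, (t, left.mean(), right.mean())
    return best

def boost_stumps(x, y, n_rounds=50, lr=0.1):
    """Gradient boosting with stumps for 1D least-squares regression:
    each round fits a stump to the current residuals."""
    pred = np.full_like(y, y.mean())
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= t, lv, rv)
    return pred

x = np.linspace(0.0, 1.0, 40)
y = (x > 0.5).astype(float)   # a step function: a single split suffices in principle
pred = boost_stumps(x, y)
print(float(np.mean((y - pred)**2)) < 1e-3)  # True
```

The additive tree's contribution is that a single interpretable tree can interpolate between this additive extreme and the full-interaction CART extreme by varying one parameter.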
Expert-augmented machine learning.
Machine learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption is limited by the level of trust afforded by given models. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of humans and machines. Here, we present expert-augmented machine learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We used a large dataset of intensive-care patient data to derive 126 decision rules that predict hospital mortality. Using an online platform, we asked 15 clinicians to assess the relative risk of the subpopulation defined by each rule compared to the total sample. We compared the clinician-assessed risk to the empirical risk and found that, while clinicians agreed with the data in most cases, there were notable exceptions where they overestimated or underestimated the true risk. Studying the rules with greatest disagreement, we identified problems with the training data, including one miscoded variable and one hidden confounder. Filtering the rules based on the extent of disagreement between clinician-assessed risk and empirical risk, we improved performance on out-of-sample data and were able to train with less data. EAML provides a platform for automated creation of problem-specific priors, which help build robust and dependable machine-learning models in critical applications.
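The filtering step, dropping rules where clinician-assessed and empirical risk disagree, might look like the following sketch. The rule names, risk values, outcomes, and tolerance are invented for illustration; the abstract does not specify the exact filtering criterion.

```python
def empirical_risk(outcomes):
    """Fraction of positive outcomes (e.g. hospital mortality) in the
    subpopulation a rule selects."""
    return sum(outcomes) / len(outcomes)

def filter_rules(rules, max_disagreement=0.2):
    """Keep only rules where the clinician-assessed risk and the empirical
    risk roughly agree; large gaps flag possible data problems."""
    kept = []
    for rule in rules:
        gap = abs(rule["clinician_risk"] - empirical_risk(rule["outcomes"]))
        if gap <= max_disagreement:
            kept.append(rule["name"])
    return kept

rules = [
    {"name": "age>80 & lactate>4", "clinician_risk": 0.6, "outcomes": [1, 1, 0, 1, 0]},
    {"name": "miscoded_var<0",     "clinician_risk": 0.1, "outcomes": [1, 1, 1, 1, 0]},
]
print(filter_rules(rules))  # ['age>80 & lactate>4']
```

Rules surviving the filter act as problem-specific priors; rules with large gaps, like the second one here, are exactly where the authors found a miscoded variable and a hidden confounder.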